Dimensional analysis of cutting edges of acetabular reamer based on 3D point cloud processing
Guowei YANG, Qifan CHEN, Xinyue LIU, Xiaoyang WANG
Journal of Computer Applications    2024, 44 (1): 285-291.   DOI: 10.11772/j.issn.1001-9081.2023010033

The acetabular reamer is one of the most important surgical tools in hip replacement surgery, and the milling quality of the acetabulum is affected by dimensional changes of its cutting edges. Since the wear of an acetabular reamer can be examined by processing its 3D point cloud, a dimensional analysis algorithm for the cutting edges of the acetabular reamer based on 3D point cloud processing was proposed. First, based on the boundary characteristics of the tooth holes, an algorithm using the tangency plane and the maximum angle criterion was introduced to obtain the boundary point cloud of the acetabular reamer. Second, the boundary point cloud was partitioned into individual tooth hole point clouds by the K-means clustering algorithm, and the point cloud of each tooth hole boundary was then searched by a radius nearest neighbor search algorithm to obtain the point clouds of cutting edges belonging to different tooth holes. Finally, the RANSAC (RANdom SAmple Consensus) algorithm was used to fit the point cloud of the acetabular reamer to a sphere, and the Euclidean distances from the cutting edge point clouds to the center of the fitted sphere were calculated to analyze the cutting edge dimensions. PCL (Point Cloud Library) was used as the development framework to process the point cloud of the acetabular reamer. The accuracy of tooth hole segmentation of the point cloud is 100%, and the spherical fitting radius is accurate to within 0.004 mm. Experimental results show that the proposed algorithm handles the acetabular reamer point cloud well and can effectively realize the dimensional analysis of its cutting edges.
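
As an illustration of the final measurement step described above, the following Python sketch fits a sphere to a point cloud with a simple RANSAC loop and then measures the Euclidean distances from cutting-edge points to the fitted center; the input file names, tolerance and iteration count are assumptions for illustration, not values from the paper.

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere through >= 4 points: returns (center, radius)."""
    A = np.c_[2.0 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def ransac_sphere(pts, n_iter=1000, tol=0.05, seed=0):
    """Keep the 4-point sphere model with the most inliers, then refit on them."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        c, r = fit_sphere(pts[rng.choice(len(pts), 4, replace=False)])
        inliers = np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_sphere(pts[best_inliers])

# placeholder inputs: full reamer cloud and the already-segmented cutting-edge points
reamer_pts = np.load("reamer.npy")
edge_pts = np.load("cutting_edges.npy")
center, radius = ransac_sphere(reamer_pts)
edge_radii = np.linalg.norm(edge_pts - center, axis=1)   # edge dimension = distance to center
print(f"fitted radius {radius:.3f} mm, edge radii {edge_radii.min():.3f}-{edge_radii.max():.3f} mm")
```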

Cross-corpus speech emotion recognition based on decision boundary optimized domain adaptation
Yang WANG, Hongliang FU, Huawei TAO, Jing YANG, Yue XIE, Li ZHAO
Journal of Computer Applications    2023, 43 (2): 374-379.   DOI: 10.11772/j.issn.1001-9081.2021122043

Domain adaptation algorithms are widely used for cross-corpus speech emotion recognition. However, many domain adaptation algorithms lose the discriminability of target domain samples while pursuing the minimization of domain discrepancy, so that these samples gather in high density near the decision boundary of the model, which degrades model performance. To address this problem, a cross-corpus speech emotion recognition method based on Decision Boundary Optimized Domain Adaptation (DBODA) was proposed. Firstly, features were extracted by convolutional neural networks. Then, the features were fed into the Maximum Nuclear-norm and Mean Discrepancy (MNMD) module to maximize the nuclear norm of the emotion prediction probability matrix of the target domain while reducing the inter-domain discrepancy, thereby enhancing the discriminability of the target domain samples and optimizing the decision boundary. In six groups of cross-corpus experiments set up on the Berlin, eNTERFACE and CASIA speech databases, the average recognition accuracy of the proposed method is 1.68 to 11.01 percentage points higher than those of the compared algorithms, indicating that the proposed model effectively reduces the sample density around the decision boundary and improves the prediction accuracy.
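
The following PyTorch sketch illustrates the kind of objective the MNMD module describes: maximizing the nuclear norm of the target-domain prediction probability matrix while penalizing a (linear-kernel) MMD between source and target features. The function names, the linear-kernel choice and the weighting factor `lam` are assumptions, not the paper's exact formulation.

```python
import torch

def mmd_linear(fs, ft):
    """Linear-kernel Maximum Mean Discrepancy between source and target feature batches."""
    delta = fs.mean(dim=0) - ft.mean(dim=0)
    return delta @ delta

def mnmd_loss(src_feat, tgt_feat, tgt_logits, lam=0.1):
    """Inter-domain discrepancy minus a scaled nuclear norm of target predictions."""
    probs = torch.softmax(tgt_logits, dim=1)              # B x C prediction probability matrix
    nuc = torch.linalg.matrix_norm(probs, ord='nuc')       # nuclear norm = sum of singular values
    return mmd_linear(src_feat, tgt_feat) - lam * nuc / probs.shape[0]

# dummy batches: 32 samples, 128-dim features, 6 emotion classes
src_feat, tgt_feat = torch.randn(32, 128), torch.randn(32, 128)
tgt_logits = torch.randn(32, 6)
print(mnmd_loss(src_feat, tgt_feat, tgt_logits).item())
```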

Semantic segmentation of point cloud scenes based on multi-feature fusion
Wen HAO, Yang WANG, Hainan WEI
Journal of Computer Applications    2023, 43 (10): 3202-3208.   DOI: 10.11772/j.issn.1001-9081.2023020119

In order to mine the semantic relationships and spatial distribution among features, and to further improve the semantic segmentation results of point clouds through multi-feature enhancement, a Multi-Feature Fusion based point cloud scene semantic segmentation Network (MFF-Net) was proposed. In the proposed network, 3D coordinates and improved edge features were used as input. Firstly, the neighbor points of each point were searched by the K-Nearest Neighbor (KNN) algorithm, and geometric offsets were calculated from the 3D coordinates and the coordinate differences between a point and its neighbors, which enhanced the local geometric feature representation of points. Secondly, the distances between the central point and its neighbor points were used as weighting information to update the edge features, and a spatial attention mechanism was introduced to obtain the semantic information among features. Thirdly, the spatial distribution information among features was further extracted by calculating the differences among neighbor features and applying a mean pooling operation. Finally, the three kinds of features were fused based on attention pooling. Experimental results demonstrate that on the S3DIS (Stanford Large-scale 3D Indoor Spaces) dataset, the mean Intersection over Union (mIoU) of the proposed network is 67.5% and the Overall Accuracy (OA) is 87.2%, which are 10.2 and 3.4 percentage points higher than those of PointNet++ respectively. It can be seen that MFF-Net can achieve good segmentation results in both large indoor and outdoor scenes.
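
A minimal sketch of the local geometric-offset construction described above, using SciPy's KD-tree for the KNN search; the value of k and the exact feature layout are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def geometric_offsets(xyz, k=16):
    """Per point: k neighbours, their coordinate offsets and distances to the centre point."""
    tree = cKDTree(xyz)
    _, idx = tree.query(xyz, k=k + 1)            # nearest neighbour is the point itself
    neighbours = xyz[idx[:, 1:]]                 # N x k x 3
    offsets = neighbours - xyz[:, None, :]       # coordinate differences to the centre point
    dists = np.linalg.norm(offsets, axis=-1, keepdims=True)
    centre = np.repeat(xyz[:, None, :], k, axis=1)
    return np.concatenate([centre, offsets, dists], axis=-1)   # N x k x 7 enhanced features

feats = geometric_offsets(np.random.rand(1024, 3))
print(feats.shape)                               # (1024, 16, 7)
```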

Cloud computing task scheduling based on orthogonal adaptive whale optimization
Jinquan ZHANG, Shouwei XU, Xincheng LI, Chongyang WANG, Jingzhi XU
Journal of Computer Applications    2022, 42 (5): 1516-1523.   DOI: 10.11772/j.issn.1001-9081.2021050806

Aiming at problems in task scheduling such as long task completion time, high task execution cost and unbalanced system load, a new cloud computing task scheduling method based on the Orthogonal Adaptive Whale Optimization Algorithm (OAWOA) was proposed. Firstly, Orthogonal Experimental Design (OED) was applied to the population initialization and global search stages to improve and maintain population diversity and prevent the algorithm from falling into premature local convergence. Then, an adaptive exponentially declining factor and a bidirectional search mechanism were used to further strengthen the global search ability of the algorithm. Finally, the fitness function was optimized to enable the algorithm to achieve multi-objective optimization. In simulation experiments, the proposed algorithm was compared with the Whale Optimization Algorithm (WOA), Particle Swarm Optimization (PSO) algorithm, Bat Algorithm (BA) and two other improved WOAs. Experimental results show that, with task scales of 50 and 500, the proposed algorithm achieves better convergence, obtains lower total task execution time and total cost than the other algorithms, and has a load balancing degree second only to that of BA. In conclusion, the proposed algorithm shows significant advantages in reducing the total time and cost of executing tasks and improving system load balance.
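
The following sketch illustrates the kind of multi-objective fitness such a scheduler evaluates for a task-to-VM assignment, combining completion time, cost and load imbalance; the weights and the linear combination are illustrative assumptions, since the paper's exact fitness formulation is not reproduced here.

```python
import numpy as np

def fitness(assign, task_len, vm_speed, vm_cost, w=(0.4, 0.3, 0.3)):
    """assign[i] is the VM index of task i; lower fitness is better."""
    vm_time = np.zeros(len(vm_speed))
    for t, vm in enumerate(assign):
        vm_time[vm] += task_len[t] / vm_speed[vm]          # execution time accumulated per VM
    makespan = vm_time.max()                               # total completion time
    cost = float((vm_time * vm_cost).sum())                # total execution cost
    imbalance = vm_time.std() / (vm_time.mean() + 1e-9)    # load imbalance degree
    return w[0] * makespan + w[1] * cost + w[2] * imbalance

rng = np.random.default_rng(1)
task_len = rng.integers(100, 1000, size=50)                # 50 tasks
vm_speed = np.array([500.0, 750.0, 1000.0, 1250.0])        # processing speeds of 4 VMs
vm_cost = np.array([0.1, 0.2, 0.3, 0.4])                   # cost per unit time
assign = rng.integers(0, 4, size=50)                       # a candidate schedule (whale position)
print(fitness(assign, task_len, vm_speed, vm_cost))
```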

Text segmentation model based on graph convolutional network
Yuqi DU, Jin ZHENG, Yang WANG, Cheng HUANG, Ping LI
Journal of Computer Applications    2022, 42 (12): 3692-3699.   DOI: 10.11772/j.issn.1001-9081.2021101768

The main task of text segmentation is to divide a text into several relatively independent blocks according to topic relevance. Aiming at the shortcomings of existing text segmentation models in extracting fine-grained features such as the structural information of text paragraphs, semantic correlation and contextual interaction, a text segmentation model TS-GCN (Text Segmentation-Graph Convolutional Network) based on Graph Convolutional Network (GCN) was proposed. Firstly, a text graph based on the structural information and semantic logic of text paragraphs was constructed. Then, semantic similarity attention was introduced to capture the fine-grained correlation between paragraph nodes, and information transmission between high-order neighborhoods of paragraph nodes was realized with the help of GCN, so that the model's ability to extract multi-granularity representations of paragraph topic features was enhanced. The proposed model was compared with the representative model CATS (Coherence-Aware Text Segmentation) and its basic model TLT-TS (Two-Level Transformer model for Text Segmentation), which are commonly used benchmarks for the text segmentation task. Experimental results show that on the Wikicities dataset, the evaluation index Pk of TS-GCN is 0.08 percentage points lower than that of TLT-TS without any auxiliary module, and on the Wikielements dataset its Pk value is 0.38 and 2.30 percentage points lower than those of CATS and TLT-TS respectively. It can be seen that TS-GCN achieves good segmentation effect.
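
A minimal NumPy sketch of one GCN propagation step over a paragraph graph, of the kind TS-GCN builds on; the similarity-thresholded adjacency and the feature sizes are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: relu(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0)

# paragraph embeddings (e.g. from a sentence encoder) and a similarity graph
H = np.random.randn(8, 64)                             # 8 paragraphs, 64-dim features
norms = np.linalg.norm(H, axis=1, keepdims=True)
sim = (H @ H.T) / (norms * norms.T)                    # cosine similarity between paragraphs
A = (sim > 0.1).astype(float)                          # connect semantically similar paragraphs
np.fill_diagonal(A, 0)
W = 0.1 * np.random.randn(64, 32)
print(gcn_layer(A, H, W).shape)                        # (8, 32)
```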

Parallel pivoted subgraph matching with multiple coding trees on GPU
Yang WANG, Shijie JIANG, Yucong CAO, Chuanwen LI
Journal of Computer Applications    2022, 42 (1): 132-139.   DOI: 10.11772/j.issn.1001-9081.2021071219

The subgraph isomorphism problem is a Non-deterministic Polynomial (NP)-complete problem, and pivoted subgraph isomorphism is a special case of it. Many efficient subgraph isomorphism algorithms exist, but there is currently no GPU-based search algorithm for the pivoted subgraph isomorphism problem, and solving the pivoted subgraph matching problem with existing subgraph isomorphism algorithms generates a large number of unnecessary intermediate results. Therefore, a GPU-based pivoted subgraph isomorphism algorithm was proposed. Firstly, through a novel coding tree method, nodes were encoded by combining node labels, degrees and the structural features of node neighbors, and the query graph nodes were pruned in parallel on the GPU, so that the size of the search space tree generated by the candidate nodes of the data graph was significantly reduced. Then, the candidate nodes of each query graph node were visited level by level, and the unsatisfactory nodes were filtered out. Finally, the obtained subgraph was verified to be an isomorphic subgraph of the query graph, realizing an efficient search for pivoted subgraph isomorphism. Experimental results show that, compared with the GPU-friendly Subgraph Matching (GpSM) algorithm, the proposed algorithm halves the execution time and performs pivoted subgraph isomorphism search efficiently and scalably. The proposed algorithm reduces the time required to solve the pivoted subgraph isomorphism problem while reducing GPU memory consumption and improving overall performance.
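
A CPU-side Python sketch of the encoding-and-pruning idea: each node is encoded by its label, degree and neighbour-label multiset, and a data-graph node is kept as a candidate for a query node only if its encoding covers the query node's. The GPU kernels and the actual coding-tree layout are not reproduced, and the toy graphs are assumptions.

```python
from collections import Counter

def encode(adj, labels, v):
    """(label, degree, neighbour-label multiset) encoding of node v."""
    return labels[v], len(adj[v]), Counter(labels[u] for u in adj[v])

def candidates(q_adj, q_labels, d_adj, d_labels, qv):
    """Data-graph nodes whose encoding is compatible with query node qv."""
    ql, qd, qn = encode(q_adj, q_labels, qv)
    result = []
    for dv in d_adj:
        dl, dd, dn = encode(d_adj, d_labels, dv)
        if dl == ql and dd >= qd and all(dn[l] >= c for l, c in qn.items()):
            result.append(dv)
    return result

# toy query and data graphs as adjacency dicts with node labels
q_adj = {0: [1, 2], 1: [0], 2: [0]}
q_labels = {0: 'A', 1: 'B', 2: 'B'}
d_adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0], 4: [5], 5: [4]}
d_labels = {0: 'A', 1: 'B', 2: 'B', 3: 'C', 4: 'A', 5: 'B'}
print(candidates(q_adj, q_labels, d_adj, d_labels, 0))   # [0]
```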

Density clustering based removal heuristic for vehicle routing problem
YANG Wang, HE Guochao, WU Yan
Journal of Computer Applications    2017, 37 (8): 2387-2394.   DOI: 10.11772/j.issn.1001-9081.2017.08.2387
Focusing on the large-scale vehicle routing problem with a heterogeneous fleet, a new neighborhood mapping method, namely a density clustering based removal heuristic, was proposed under the Adaptive Large Neighborhood Search (ALNS) framework. ALNS includes two phases, destruction and reconstruction, and provides an optimized solution by repeatedly destroying and reconstructing the current solution. In the destruction phase, a route was randomly selected and its stores were grouped into clusters by density clustering; the stores were then removed from the route according to these clusters. In the reconstruction phase, the Greedy or Regret-2 insertion algorithm was randomly chosen to place the removed stores into proper routes. Test results on benchmark instances validate the effectiveness of the proposed method. Compared with other existing algorithms, the ALNS density clustering based removal heuristic has a lower error rate and better solution quality; in real situations, the proposed algorithm can provide an optimized solution within limited time.
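
A sketch of the density-clustering removal step under the ALNS framework, using scikit-learn's DBSCAN as a stand-in for the paper's clustering procedure; the route representation, eps and min_samples are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_removal(route_coords, eps=2.0, min_samples=3, seed=0):
    """Cluster the stores of one route by density and remove one whole cluster."""
    rng = np.random.default_rng(seed)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(route_coords)
    clusters = [c for c in set(labels) if c != -1]     # ignore noise points
    if not clusters:
        return np.array([], dtype=int)
    chosen = rng.choice(clusters)
    return np.where(labels == chosen)[0]               # indices of the removed stores

route = np.random.rand(20, 2) * 10                     # coordinates of 20 stores on one route
removed = cluster_removal(route)
print("stores removed for reinsertion:", removed)
```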
Survey on construction of measurement matrices in compressive sensing
WANG Qiang, ZHANG Peilin, WANG Huaiguang, YANG Wangcan, CHEN Yanlong
Journal of Computer Applications    2017, 37 (1): 188-196.   DOI: 10.11772/j.issn.1001-9081.2017.01.0188
The construction of measurement matrices in compressive sensing varies widely and is constantly developing. In order to sort out the research results and identify the development trend of measurement matrices, the process of measurement matrix construction was introduced systematically. Firstly, compared with traditional signal acquisition theory, the advantages of high resource utilization and small storage space were expounded. Secondly, within the framework of compressive sensing and focusing on four aspects, namely the construction principle, the generation method, the structure design and the optimization method of measurement matrices, the construction of measurement matrices in compressive sensing was summarized, and the advantages of different principles, generation methods and structures were introduced in detail. Finally, based on the research results, the development directions of measurement matrices were discussed.
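
As a small, generic example of the random constructions discussed in such surveys, the sketch below draws a Gaussian measurement matrix and compresses a sparse signal via y = Φx; the dimensions and sparsity level are illustrative.

```python
import numpy as np

n, m, k = 256, 64, 8                                           # signal length, measurements, sparsity
rng = np.random.default_rng(0)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)    # k-sparse signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)                 # Gaussian measurement matrix
y = Phi @ x                                                    # compressed measurements
print(Phi.shape, y.shape)                                      # (64, 256) (64,)
```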
Brain tumor segmentation based on morphological multi-scale modification
WAN Shengyang, WANG Xiaopeng, HE Shihe, WANG Chengyi
Journal of Computer Applications    2014, 34 (2): 593-596.  
Since brain tumor segmentation in Magnetic Resonance Imaging (MRI) is usually hampered by noise, intensity inhomogeneity within the tumor, and fuzzy or discontinuous boundaries, it is difficult to segment tumors accurately. To improve the segmentation results, a marker-controlled morphological multi-scale modification method was proposed. Firstly, the method worked on morphological gradient images, in which adaptive structuring elements were applied to pixels in different areas. In addition, modifying the gradient image was key to avoiding large misregistration of target boundaries. Finally, the marker-controlled watershed was applied to segment the brain tumor. The experimental results show that the proposed method achieves more accurate segmentation of brain tumors.
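
A hedged sketch of marker-controlled watershed on a morphological gradient using scikit-image; the test image, marker selection and fixed structuring-element size are placeholders for the paper's adaptive multi-scale scheme.

```python
import numpy as np
from skimage import data, filters, morphology, segmentation

image = data.camera().astype(float)                    # placeholder for an MRI slice
selem = morphology.disk(2)
gradient = morphology.dilation(image, selem) - morphology.erosion(image, selem)

otsu = filters.threshold_otsu(image)
markers = np.zeros_like(image, dtype=int)
markers[image < 0.5 * otsu] = 1                        # background marker
markers[image > 1.5 * otsu] = 2                        # bright (tumor-like) region marker
labels = segmentation.watershed(gradient, markers)     # marker-controlled watershed
print(np.unique(labels))                               # [1 2]
```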
Fast image registration algorithm based on locally significant edge feature
YANG Jian, LI Ruonan, HUANG Chenyang, WANG Gang, DING Chuang
Journal of Computer Applications    2014, 34 (1): 149-153.   DOI: 10.11772/j.issn.1001-9081.2014.01.0149
Considering that the Scale Invariant Feature Transform (SIFT) algorithm extracts a large number of feature points and consumes a lot of matching time while achieving low matching accuracy, a fast image registration algorithm based on locally significant edge features was proposed. The SIFT algorithm was used to extract feature points, and wavelet edge detection was used to extract image edges so as to establish the neighborhood edge characteristics of the feature points; the points with significant edge characteristics were then retained as significant feature points. A feature vector was formed by the shape-context operator and the edge features. Euclidean distance was used as the matching metric to preliminarily match the feature points extracted from different images, and then the RANdom SAmple Consensus (RANSAC) algorithm was applied to eliminate mismatched points. The experimental results show that the algorithm effectively controls the number of feature points, improves the quality of the feature points, reduces the feature search space and enhances the efficiency of feature matching.
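
A condensed OpenCV sketch of the standard SIFT plus RANSAC pipeline the method builds on; the edge-based filtering of keypoints and the shape-context descriptor described above are omitted, and the image file names are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)            # RANSAC rejects mismatches
print("inlier matches:", int(mask.sum()))
```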
Spatial data visualization based on cluster analysis
ZHANG Yang, WANG Chen
Journal of Computer Applications    2013, 33 (10): 2981-2983.  
Firstly, the research status and basic methods of spatial data visualization technology were introduced, and two common kinds of methods, entity-based and region-based, were analyzed. Then a clustering-based spatial data visualization method was proposed, which first performed cluster analysis on the spatial data using spatial clustering algorithms represented by ASCDT (Adaptive Spatial Clustering algorithm based on Delaunay Triangulation) and obtained the description parameters of the clustering result, and then designed visual objects for the clustering result by combining the basic visualization methods with the characteristics of the parameters, so that the mapping relationship was established. Finally, some issues that need to be further studied and improved were discussed.
Multi-objective coverage control in wireless sensor network based on Poisson distribution
XU Yixin, BAI Yan, ZHAO Tianyang, WANG Renshu
Journal of Computer Applications    2013, 33 (07): 1820-1824.   DOI: 10.11772/j.issn.1001-9081.2013.07.1820
A multi-objective optimization coverage control method was proposed to solve the intractable problem of k-coverage rate, energy consumption and reliability in wireless sensor networks, on the assumption that the nodes follow a Poisson distribution. To overcome the shortcomings of multi-objective differential evolution algorithms in population initialization, parameter control and population maintenance, strategies of orthogonal population initialization, self-adaptive parameter control and dynamic population maintenance were designed respectively, and an improved multi-objective differential evolution algorithm (I-DEMO) was proposed to solve the model. The results show that the control strategy effectively achieves a 3-coverage rate of 81.2%, reduces energy consumption and ensures reliability. The proposed algorithm can dominate 76% of the Pareto fronts of the traditional algorithm and can be applied to other multi-objective problems.
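
A small simulation of the model's deployment assumption: sensor nodes placed as a spatial Poisson process, with the k-coverage rate estimated on a grid; the region size, sensing radius and intensity are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
area, radius, intensity, k = 100.0, 10.0, 0.02, 3      # region side, sensing radius, node density, k
n_nodes = rng.poisson(intensity * area * area)         # Poisson-distributed number of nodes
nodes = rng.uniform(0, area, size=(n_nodes, 2))

xs, ys = np.meshgrid(np.linspace(0, area, 100), np.linspace(0, area, 100))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
dists = np.linalg.norm(grid[:, None, :] - nodes[None, :, :], axis=2)
covered_by = (dists <= radius).sum(axis=1)             # sensors covering each grid point
print("estimated k-coverage rate:", (covered_by >= k).mean())
```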
Simulation of knowledge diffusion among enterprise clusters based on cognitive perspective
NIU Jia-yang, WANG Hong-guo, SHAO Zeng-zhen, SONG Chao-chao
Journal of Computer Applications    2012, 32 (10): 2879-2883.   DOI: 10.3724/SP.J.1087.2012.02879
In order to overcome the deficiency of previous research on the knowledge structures of enterprises, a knowledge diffusion model among enterprise clusters based on a cognitive perspective, the Clusters of Enterprise Knowledge Transmission Model (CEKTM), was proposed. The model defined the enterprise chromosome from a cognitive perspective and drew on the path selection strategy of ant colony algorithms to give the learning strategies and the knowledge interaction mechanism. The simulation results indicate that the knowledge structure within a cluster follows a cognitive proximity principle, and similar knowledge structures have a higher knowledge transfer probability; an enterprise chromosome dimension that is too high or too low is not good for improving the average knowledge level of the enterprise cluster; and in the initial period, a small market environment is in favor of enterprise cluster development.
Semi-fragile watermarking algorithm based on dynamic image segmentation and information entropy
Hai-yang WANG, Sheng-bing CHE, Xu SHU
Journal of Computer Applications    2011, 31 (08): 2169-2173.   DOI: 10.3724/SP.J.1087.2011.02169
Most of the existing semi-fragile watermarking algorithms adopt double-step fixed quantization, do not consider the attack characteristics of the carrier image, and only divide the original image into smooth and texture regions, so the robustness of the watermark has reached a bottleneck. To further improve robustness, a new semi-fragile watermarking algorithm based on dynamic image region segmentation and information entropy was proposed. The dynamic image region segmentation technique divided an image into several embedding regions and determined the embedding strength of each region; a quantization algorithm based on entropy introduced information entropy into the quantization step and could effectively measure the amount of sensitive information carried by different embedding regions. The experimental results show that, compared with existing semi-fragile watermarking algorithms, the proposed algorithm has better masking performance and stronger robustness.
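
An illustration of the entropy measurement that could drive region-dependent quantization: per-block Shannon entropy of pixel intensities mapped to an embedding strength. The block size and the entropy-to-strength mapping are assumptions, not the paper's scheme.

```python
import numpy as np

def block_entropy(block, bins=256):
    """Shannon entropy of the pixel intensities in one block."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    return float(-(p * np.log2(p)).sum())

def embedding_strength(block, low=4.0, high=12.0):
    """Map block entropy (0..8 bits for 8-bit images) to a quantization step."""
    return low + (high - low) * block_entropy(block) / 8.0

img = np.random.randint(0, 256, size=(512, 512))       # placeholder carrier image
blk = img[:8, :8]
print(block_entropy(blk), embedding_strength(blk))
```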
Trust-based authentication routing protocol for satellite network
PAN Yan-hui, WANG Tao, WU Yang, WANG Wen-hao
Journal of Computer Applications    2011, 31 (03): 781-783.   DOI: 10.3724/SP.J.1087.2011.00781
A secure routing protocol is a key element in guaranteeing satellite network security. To solve the problem that most routing protocols lack a security scheme, the Elliptic Curve Pintsov-Vanstone Signature Scheme (ECPVSS) was used to provide confidentiality and authentication of packets, and a trust evaluation scheme was used to exclude internal malicious nodes from the route path, forming a secure routing protocol oriented to the High Altitude Platform (HAP)/Low Earth Orbit (LEO) architecture. The analysis results show that the proposed protocol can protect the network from some common routing attacks.
Query expansion based on user relevance feedback and ontology
Xu-yang WANG
Journal of Computer Applications   
A novel method for Query Expansion (QE) was presented in this paper. The proposed method is a hybrid QE technique that combines user relevance feedback with ontology, and it uses FirteX as the experimental platform. The proposed method was compared with cosine similarity-based QE, a widely used query expansion technique. The experimental results show that the proposed method outperforms cosine similarity-based QE by 15% in average precision and 13% in average recall, with an improvement of 16% in F-measure.
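
A hedged sketch of the relevance-feedback half of the idea, using a Rocchio-style update of the query vector as a stand-in; the ontology-based expansion and the FirteX integration are not shown, and the term vectors are illustrative.

```python
import numpy as np

def rocchio(query_vec, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward relevant documents and away from non-relevant ones."""
    q = alpha * query_vec
    if len(relevant):
        q = q + beta * relevant.mean(axis=0)
    if len(nonrelevant):
        q = q - gamma * nonrelevant.mean(axis=0)
    return np.maximum(q, 0)                            # keep non-negative term weights

vocab_size = 1000
query = np.zeros(vocab_size)
query[[10, 42]] = 1.0                                  # original query terms
rel = np.random.rand(5, vocab_size) * (np.random.rand(5, vocab_size) > 0.98)
nonrel = np.random.rand(5, vocab_size) * (np.random.rand(5, vocab_size) > 0.98)
expanded = rocchio(query, rel, nonrel)
print("top expansion terms:", np.argsort(-expanded)[:10])
```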
Secure and efficient WTLS handshake protocol
Wen XIANG, Liang-sheng TAO, Tong-yang WANG
Journal of Computer Applications   
The primary goal of Wireless Transport Layer Security (WTLS) is to secure wireless communication. An efficient WTLS handshake protocol speeds up establishing a session between the client and the server while reducing resource consumption on the mobile terminal. A new WTLS handshake protocol called Light online compute WTLS (L-WTLS) was proposed. L-WTLS is more economical because it requires less computation, especially online computation. Finally, L-WTLS is proved to be more suitable for low-power mobile terminals.
Design and implementation of malwebpage detection system based on honeyclient
Xiao-Yan SUN, Yang WANG, Yue-fei ZHU, Dong-ying WU
Journal of Computer Applications   
Internet Explorer is currently the most popular client software and is frequently targeted by malware. To address such threats to Internet Explorer, the characteristics of Web attacks were analyzed, and a malicious webpage detection system based on a honeyclient was designed, in which a spider was combined with a honeypot. In this system, the spider was used to collect URLs, the client engine automatically created Internet Explorer processes, and a device-driver based detector was used to detect malware arriving through Internet Explorer. Finally, the URLs of malicious webpages were added to a blacklist and the malware database was enlarged.